Online efficient learning with quantized KLMS and L1 regularization

Authors

  • Badong Chen
  • Songlin Zhao
  • Sohan Seth
  • José Carlos Príncipe
Abstract

In a recent work, we proposed the quantized kernel least mean square (QKLMS) algorithm, which is quite effective for online sequential learning of a nonlinear mapping with a slowly growing radial basis function (RBF) structure. In this paper, in order to further reduce the network size, we propose a sparse QKLMS algorithm, derived by adding a sparsity-inducing l1-norm penalty on the coefficients to the squared-error cost. Simulation examples show that the new algorithm works efficiently and yields a much sparser network while preserving desirable performance.

Keywords: online learning, kernel adaptive filtering, QKLMS, l1-norm penalty.
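The idea can be illustrated with a minimal sketch: a QKLMS-style quantized dictionary update combined with per-step soft-thresholding of the coefficients. This is an illustrative reconstruction, not the authors' exact update rule; the parameter names (`eta`, `eps_u`, `lam`, `sigma`) and the pruning of zeroed coefficients are assumptions.

```python
import numpy as np

def gauss(u, c, sigma):
    """Gaussian (RBF) kernel between input u and center c."""
    return np.exp(-np.sum((u - c) ** 2) / (2 * sigma ** 2))

class SparseQKLMS:
    """Sketch of QKLMS with an online l1 shrinkage on the coefficients.
    eps_u is the quantization radius; lam weights the (assumed) l1 penalty."""
    def __init__(self, eta=0.5, eps_u=0.1, lam=1e-3, sigma=1.0):
        self.eta, self.eps_u, self.lam, self.sigma = eta, eps_u, lam, sigma
        self.centers, self.alphas = [], []

    def predict(self, u):
        return sum(a * gauss(u, c, self.sigma)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, u, d):
        e = d - self.predict(u)                      # prediction error
        if self.centers:
            dists = [np.linalg.norm(u - c) for c in self.centers]
            j = int(np.argmin(dists))
        if self.centers and dists[j] <= self.eps_u:
            self.alphas[j] += self.eta * e           # quantize: merge into nearest center
        else:
            self.centers.append(np.asarray(u, dtype=float))
            self.alphas.append(self.eta * e)         # otherwise grow the network
        # l1 shrinkage (soft-thresholding) of all coefficients, then prune zeros
        self.alphas = [np.sign(a) * max(abs(a) - self.eta * self.lam, 0.0)
                       for a in self.alphas]
        keep = [k for k, a in enumerate(self.alphas) if a != 0.0]
        self.centers = [self.centers[k] for k in keep]
        self.alphas = [self.alphas[k] for k in keep]
        return e
```

The quantization step alone already slows the growth of the RBF network; the shrink-and-prune step additionally drives small coefficients exactly to zero, which is what produces the sparser network reported in the abstract.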


Similar Articles

Efficient Online and Batch Learning Using Forward Backward Splitting

We describe, analyze, and experiment with a framework for empirical loss minimization with regularization. Our algorithmic framework alternates between two phases. On each iteration we first perform an unconstrained gradient descent step. We then cast and solve an instantaneous optimization problem that trades off minimization of a regularization term while keeping close proximity to the result...
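For an l1 regularizer the second phase of this framework has a closed-form solution, soft-thresholding. A sketch of one iteration (the function name and signature are my own, not from the paper):

```python
import numpy as np

def fobos_l1_step(w, grad, eta, lam):
    """One forward-backward splitting iteration for an l1-regularized loss.
    Phase 1: unconstrained gradient step on the loss.
    Phase 2: exact minimizer of eta*lam*||w||_1 + ||w - v||^2 / 2,
    which is componentwise soft-thresholding at level eta*lam."""
    v = w - eta * grad                                          # phase 1
    return np.sign(v) * np.maximum(np.abs(v) - eta * lam, 0.0)  # phase 2
```

The prox step trades off the regularization term against proximity to the gradient-step result `v`, exactly as the abstract describes, and zeroes out any coordinate whose magnitude falls below the threshold.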


Kernel Least Mean Square Algorithm

A simple, yet powerful, learning method is presented by combining the famed kernel trick and the least-mean-square (LMS) algorithm, called the KLMS. General properties of the KLMS algorithm are demonstrated regarding its well-posedness in very high dimensional spaces using Tikhonov regularization theory. An experiment is studied to support our conclusion that the KLMS algorithm can be readily u...
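In outline, KLMS runs LMS in the reproducing kernel Hilbert space of a Gaussian kernel, so the learned function is a growing RBF expansion. A toy sketch (class and parameter names are assumptions; practical variants, such as the quantized version above, bound the growth):

```python
import numpy as np

class KLMS:
    """Kernel LMS sketch: LMS in the RKHS of a Gaussian kernel.
    Each training pair adds one RBF center weighted by eta times the
    prediction error; the step size eta also acts as implicit regularization."""
    def __init__(self, eta=0.5, sigma=1.0):
        self.eta, self.sigma = eta, sigma
        self.centers, self.alphas = [], []

    def predict(self, u):
        return sum(a * np.exp(-np.sum((u - c) ** 2) / (2 * self.sigma ** 2))
                   for a, c in zip(self.alphas, self.centers))

    def update(self, u, d):
        e = d - self.predict(u)            # instantaneous prediction error
        self.centers.append(np.asarray(u, dtype=float))
        self.alphas.append(self.eta * e)   # network grows by one unit per sample
        return e
```

The linear growth of the dictionary (one center per sample) is precisely what motivates the quantized and sparse variants discussed in the main paper.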


Kernel Based Learning for Nonlinear System Identification

In this paper, an efficient kernel-based algorithm is developed with application to nonlinear system identification. Kernel adaptive filters are known for their universal approximation property with the Gaussian kernel and for their online learning capability. The proposed adaptive step-size KLMS (ASS-KLMS) algorithm can exhibit universal approximation capability, irrespective of the choice of reproducin...


Efficient Learning using Forward-Backward Splitting

We describe, analyze, and experiment with a new framework for empirical loss minimization with regularization. Our algorithmic framework alternates between two phases. On each iteration we first perform an unconstrained gradient descent step. We then cast and solve an instantaneous optimization problem that trades off minimization of a regularization term while keeping close proximity to the re...


L1 Regularized Linear Temporal Difference Learning

Several recent efforts in the field of reinforcement learning have focused attention on the importance of regularization, but the techniques for incorporating regularization into reinforcement learning algorithms, and the effects of these changes upon the convergence of these algorithms, are ongoing areas of research. In particular, little has been written about the use of regularization in onl...
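One simple way such a combination can look, sketched here purely as an illustration (per-step soft-thresholding of the weights of linear TD(0); this is an assumption of mine, not necessarily the method studied in that paper):

```python
import numpy as np

def td0_l1(episodes, n_states, eta=0.1, lam=0.01, gamma=1.0):
    """Linear TD(0) with one-hot state features, followed by an l1
    soft-thresholding of the weight vector after every update."""
    w = np.zeros(n_states)
    for ep in episodes:
        # ep is a list of (state, reward, next_state); next_state is None at the end
        for s, r, s2 in ep:
            target = r + (gamma * w[s2] if s2 is not None else 0.0)
            w[s] += eta * (target - w[s])                              # TD(0) update
            w = np.sign(w) * np.maximum(np.abs(w) - eta * lam, 0.0)    # l1 shrink
    return w

# Deterministic two-state chain: s0 -(r=0)-> s1 -(r=1)-> terminal.
# True values are V(s0) = V(s1) = 1; the l1 term biases them slightly toward 0.
episode = [(0, 0.0, 1), (1, 1.0, None)]
w = td0_l1([episode] * 500, n_states=2)
```

As in the supervised setting, the shrinkage step drives weights of irrelevant features exactly to zero, at the cost of a small bias in the learned value estimates.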




Journal:

Volume   Issue

Pages  -

Publication date: 2012